A new framework for knowledge revision of abductive agents through their interaction (preliminary report)

Authors

  • Andrea Bracciali
  • Paolo Torroni
Abstract

The aim of this work is the design of a framework for the revision of knowledge in abductive reasoning agents, based on interaction. We address issues such as: how to exploit knowledge multiplicity to find solutions to problems that agents could not individually solve, what information must be passed or requested, how agents can take advantage of the answers that they obtain, and how they can revise their reasoning process as a consequence of interacting with each other. In this preliminary report, we describe a novel negotiation framework in which agents exchange not only abductive hypotheses but also metaknowledge, such as their own integrity constraints. Besides, we formalise some aspects of such a framework by introducing an algebra of integrity constraints, aimed at formally supporting the updating/revising process of the agent knowledge.

* This work is partially funded by the Information Society Technologies programme of the European Commission under the IST-2001-32530 SOCS Project [1].

1 Multiple-source knowledge and coordinated reasoning

The agent metaphor has recently become a very popular way to model distributed systems, in many application domains that require a goal-directed behaviour of autonomous entities. Thanks also to the recent explosion of the Internet and communication networks, the increased accessibility of knowledge located in different sources at a relatively low cost is opening up interesting scenarios where communication and knowledge sharing can be a constant support to the reasoning activity of agents. In knowledge-intensive applications, the agent paradigm will be able to enhance traditional stand-alone expert systems interacting with end-users, by allowing for inter-agent communication and autonomous revision of knowledge. Agent-based solutions can already be found in areas such as information and knowledge integration (see the Sage and Find Future projects by Fujitsu), Business Process Management (Agentis Software), and the Oracle Intelligent Agents, not to mention decentralised control and scheduling, and e-procurement (Rockwell Automation, Living Systems AG, Lost Wax, iSOCO), just to cite some.¹

¹ A collection of excerpts of papers and web pages about industrial applications of agent technology, including references to the above mentioned projects and applications, can be downloaded from the address: http://lia.deis.unibo.it/~pt/misc/AIIA03-review.pdf

In order to make such solutions reliable, easy to control, specify, and verify, and in order to make their behaviour easy to understand, sound and formal foundations are needed. For this reason, recent work in logic programming considers multi-agent systems an interesting and powerful paradigm. Work done by Kowalski and Sadri [2] on the agent cycle, by Leite et al. [3] on combining several Non-Monotonic Reasoning mechanisms in agents, by Satoh et al. [4] on speculative computation, by Dell'Acqua et al. [5] on agent communication and updates, by Sadri et al. [6] on agent dialogues, and by Ciampolini et al. [7] on the coordination of reasoning of abductive logic agents, are only some examples of the application of Logic Programming techniques to Multi-Agent Systems. A common characteristic among them is that the agent paradigm brings about the need for dealing with knowledge incompleteness (due to the multiplicity and autonomy of agents) and evolution (due to their interactions). In this research effort, many proposals have been put forward that consider negotiation and dialogue a suitable way to let agents exchange information and solve problems in a collaborative way, and that consider abduction a privileged form of reasoning under incomplete information.
However, such information exchange is often limited to simple facts that help agents revise their beliefs. In [7], for instance, such facts are modelled as hypotheses made to explain some observation in a coordinated abductive reasoning activity; in [4] the information exchanged takes the form of answers to questions aimed at confirming/disconfirming assumptions; in [6], of communication acts in a negotiation setting aimed at sharing resources. [5] and previous work by the same authors present a combination of abduction and updates in a multi-agent setting, where agents are able to propose updates to each other's theories in different patterns.

In this scenario of collaboration among abductive agents, we envisage a framework in which agents are able to exchange knowledge in various ways. Our idea is to define a framework where agents can exchange information in the form of predicates, theories, and integrity constraints, and they do so as the result of a negotiation process. Negotiation is therefore about knowledge. Agents will be able to revise their own constraints, for example by relaxing or tightening them. In this way, the revision mechanism strongly exploits the interaction between agents.

This paper describes preliminary work, in which we focus especially on information exchange about integrity constraints. In doing so, we abstract away from issues such as ontology and communication languages and protocols: we assume that all agents have a common ontology and that they communicate using the same language. Agents will actively ask for pieces of knowledge, be they facts, hypotheses, or integrity constraints, and they will autonomously decide whether and how to modify their own constraints whenever needed. For instance, an agent which is unable to explain some observation given its current knowledge will try to collect information from other agents, in the forms mentioned above, and possibly decide to relax its own constraints in a way that allows it to explain such an observation. Conversely, an agent may find out that some assumptions that it made are inconsistent (for instance, with social constraints), and try to gather information about how to tighten its own constraints or add new ones which prevent it from making such assumptions.

The distinguishing features of the distributed reasoning revision methodology that we envisage consist of a mix of introspection capabilities and communication capabilities. In the style of abductive reasoning, agents are able to provide conditional explanations of the facts that they prove, they are able to communicate such explanations in order to let others validate them, and they are able to single out and communicate the constraints that prevent them from, or allow them to, explain an observation. Finally, agents are able to revise their constraints, according to those that may be proposed by others. Below, we informally present a possible interaction between two agents which leads them to modify their knowledge in different ways. This example will be elaborated later, after the necessary notation is introduced.

Example 1. Let us consider an interaction between two agents, A and B, having different expertise about a given topic.
(1) A ⊭ ¬f, b — Agent A is unable to prove (find an explanation for) the goal (observation) "there is a bird that does not fly" . . .

(2) A → B : ¬f, b — . . . hence it asks agent B for an explanation . . .

(3) B → A : ¬f, b, ∆ — . . . which B returns, as a set ∆ of assumptions which explain ¬f, b (e.g., ∆ = {p}: a penguin is a bird that does not fly);

(4) A → B : IC∆ — A, driven by the assumptions suggested by B, is able to determine a (significant) set IC∆ of constraints that prevented it from assuming {¬f, b, p}, and therefore from proving the goal (e.g., {¬f, b → false}: it is not possible that a bird does not fly). A communicates the set IC∆ to B;

(5) B → A : IC′∆ — B is able to find a set IC′∆ that relaxes the original constraints IC∆ and that was used by B in its own proof of the goal, e.g., {¬f, b, ¬p → false} (it is not possible that a bird does not fly, unless it is a penguin). B proposes IC′∆ to A;

(6) A[IC∆ ⊕ IC′∆] ⊨ ¬f, b — A revises its constraints with those proposed by B, by means of some updating operation ⊕, and is now able to prove its goal.

This brief example hides several non-trivial steps that involve deduction, introspection, interaction, and revision. To give the global picture, we first discuss these aspects informally, and then focus on the integrity constraint revision process (understood within the overall framework). The long-term goal of our research is the definition of a complete framework for distributed revision of agent knowledge.

The rest of this paper is organised as follows: Section 2 introduces Abductive Logic Programming, and Section 3 discusses the above mentioned open points. Section 4 presents the formal model we devised to address these open points; there we define an algebra of constraints, constraint selection operators allowing one to determine which constraints are relevant in a proof, and constraint updating/revising operators. Possible applications of the framework are illustrated in Section 5. Concluding remarks and future work are summarised in Section 6.

2 Background on Abductive Logic Programming

In this section we give some background on Abductive Logic Programming and introduce some notation about abductive agents. An Abductive Logic Program (ALP) is a triple 〈T, A, IC〉, where T is a theory (a set of predicate definitions), A is a set of predicates that we call "abducibles", and IC is a set of integrity constraints. Given an ALP 〈T, A, IC〉 and a goal G (the initial goal or "observation"), abduction is the process of determining a set ∆ of abducible predicates (∆ ⊆ A) such that (the symbol ⊨ denotes entailment):

T ∪ ∆ ⊨ G,  and  T ∪ IC ∪ ∆ ⊭ ⊥.

If such a set exists, we call ∆ an abductive "explanation" for G in 〈T, A, IC〉, written:

〈T, A, IC〉 ⊨^abd_∆ G

Abduction is reasoning in the presence of uncertainty, represented as the possible abducibles that can be assumed in order to explain an observation. In this context, the set of integrity constraints IC of an ALP determines which assumptions can coherently be made together. Informally speaking, IC limits the choice of possible explanations of observations; in other words, it rules out some hypothetical worlds from those modelled by a given ALP.
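To fix intuitions about the two conditions above, here is a minimal sketch of an abductive entailment check for a propositional ALP. It is not the paper's proof procedure: we assume the theory is a set of propositional definite clauses, encode a negated literal ¬f as a fresh atom not_f (in the style of footnote 2 below), and use naive forward chaining for ⊨.

```python
from typing import FrozenSet, Iterable, Set, Tuple

Clause = Tuple[str, FrozenSet[str]]  # (head, body) stands for: head <- body

def closure(theory: Iterable[Clause], facts: Set[str]) -> Set[str]:
    """Naive forward chaining: all atoms derivable from `facts` using `theory`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in theory:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

def explains(theory, ics, delta: Set[str], goal: Set[str]) -> bool:
    """Check T ∪ ∆ ⊨ G and T ∪ IC ∪ ∆ ⊭ ⊥: every goal literal is derivable
    from the assumptions ∆, and no constraint body is fully derivable."""
    derived = closure(theory, delta)
    if not goal <= derived:
        return False  # T ∪ ∆ does not entail G
    return all(not body <= derived for body in ics)

# Hypothetical encoding of Example 1: a penguin is a bird that does not fly,
# with A's constraint already relaxed to <- b, ¬f, ¬p.
T = [("b", frozenset({"p"})), ("not_f", frozenset({"p"}))]
IC = [frozenset({"b", "not_f", "not_p"})]
print(explains(T, IC, {"p"}, {"b", "not_f"}))  # True: ∆ = {p} explains ¬f, b
```

Real abductive proof procedures work top-down and interleave consistency checking with the derivation; this bottom-up sketch merely illustrates that an explanation must both entail the goal and keep the constraints satisfied.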
2.1 Syntax and Semantics of Integrity Constraints

We consider integrity constraints having the following syntax:

  ic      ::= ⊥ ← Body
  Body    ::= Literal [, Literal]
  Literal ::= Atom | ¬Atom

Constraints of the form ⊥ ← x, ¬x are left implicit in the agents' abductive logic programs, and they can be used to implement default negation.²

² According to [8], negation as default can be recovered into abduction by replacing negated literals of the form ¬a with a new positive, abducible atom not_a and by adding the integrity constraint ← a, not_a to IC.

Given an integrity constraint ic = ⊥ ← L1, . . . , Ln, we call body(ic) the set of literals {L1, . . . , Ln}. Also, we denote singleton integrity constraints by ic, ic1, ic2, ic′, ic′′, . . . , sets of integrity constraints by IC, IC1, IC2, . . . , IC′, IC′′, . . . , and sets of literals by ∆, ∆1, ∆2, . . . , ∆′, ∆′′, . . . . In the following, we adopt the notation ← L1, . . . , Ln as a shorthand for ⊥ ← L1, . . . , Ln.

Intuitively, an integrity constraint ic = ← L1, . . . , Ln represents a restriction, preventing L1, . . . , Ln from all being true at the same time. If some of the Lj in the body of ic are abducible atoms, ic constrains the set of possible explanations that can be produced during an abductive derivation process.

Example 2. Let us consider agent A of Example 1. Its abductive program states that something can be considered a bird if either it flies or it has feathers, while it can be considered a mammal if it has hair. Finally, a dolphin is something that swims and has no hair:

  T  = { b ← fe.   b ← f.   m ← ha.   d ← s, ¬ha. }
  A  = { f, s, fe, ha }
  IC = { ← b, ¬f.   ← d, ¬s. }

In order to explain its observations, agent A can make assumptions according to its set A and, for instance, categorise a dolphin by assuming that it is able to swim. Note how abducibles have no definitions. IC prevents A from having a bird that does not fly or a dolphin that does not swim (together with all the implicit integrity constraints). It is clear that there is no ∆ ⊆ A such that T ∪ ∆ ⊨ b, ¬f and T ∪ ∆ ∪ IC ⊭ ⊥, and hence, as supposed in point (1) of Example 1 (in its informal language), A ⊭ b, ¬f.

We consider abductive agents to be agents whose knowledge is represented by an abductive logic program, provided with an abductive entailment operator. At this stage, we make no assumptions about the underlying operational model.

3 Issues in a constraint revision framework

Example 1 has informally introduced some issues which a framework supporting a distributed constraint revision process must address. In this section, we discuss them in more detail. The next section will provide a better technical insight, by presenting some preliminary results about how some of these issues could be formally modelled.
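Anticipating the operators of Section 4 (whose exact definitions are not given here), the following sketch illustrates, under our own hypothetical encoding, the two revision moves of Example 1: relaxing a constraint by adding an exception literal to its body, and tightening a set of constraints by adding a new one. Constraint bodies are sets of literal names, with ¬p written as "not_p" as before.

```python
from typing import FrozenSet

ICSet = FrozenSet[FrozenSet[str]]  # a set of constraints, each one a body of literals

def relax(ics: ICSet, target: FrozenSet[str], exception: str) -> ICSet:
    """Weaken the constraint with body `target` by adding one more literal:
    the enlarged body is violated in strictly fewer situations."""
    assert target in ics, "can only relax a constraint that is present"
    return (ics - {target}) | {target | {exception}}

def tighten(ics: ICSet, new_body: FrozenSet[str]) -> ICSet:
    """Strengthen a set of constraints by adding a new constraint."""
    return ics | {new_body}

# Example 1, step (5): <- b, ¬f is relaxed to <- b, ¬f, ¬p
# ("a bird must fly, unless it is a penguin").
ic_delta: ICSet = frozenset({frozenset({"b", "not_f"})})
relaxed = relax(ic_delta, frozenset({"b", "not_f"}), "not_p")
print(relaxed)  # {frozenset({'b', 'not_f', 'not_p'})}
```

Under this reading, an updating operator such as the ⊕ used in step (6) of Example 1 would combine an agent's current constraints with a proposed revision built out of moves like these; the algebra of Section 4 is meant to make such combinations precise.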

Publication date: 2003